Computing with noise in spiking neural networks
Trial-to-trial variability is a ubiquitous characteristic of neural firing patterns and is
often regarded as a side-effect of intrinsic noise. Increasing evidence indicates that this
variability is a signature of network computation. The computational role of noise is not
yet clear and existing frameworks use abstract models for stochastic computation. In
this work, we use networks of spiking neurons to perform stochastic inference by sam-
pling. We provide a novel analytical description of the neural response function with an
unprecedented range of validity. This description enables an implementation of spiking
networks in simulations to sample from Boltzmann distributions. We show the robust-
ness of these networks to parameter variations and highlight the substantial advantages
of short-term plasticity in our framework. We demonstrate accelerated inference on neu-
romorphic hardware with a speed-up of 10^4 compared to biological networks, regardless
of network size. We further explore the role of noise as a computational component in
our sampling networks and identify the functional equivalence between synaptic connec-
tions and mutually shared noise. Based on this, we implement interconnected sampling
ensembles which exploit their own activity as a noise resource to maintain a stochastic
firing regime.
Spiking neurons with short-term synaptic plasticity form superior generative networks
Spiking networks that perform probabilistic inference have been proposed both
as models of cortical computation and as candidates for solving problems in
machine learning. However, the evidence for spike-based computation being in
any way superior to non-spiking alternatives remains scarce. We propose that
short-term plasticity can provide spiking networks with distinct computational
advantages compared to their classical counterparts. In this work, we use
networks of leaky integrate-and-fire neurons that are trained to perform both
discriminative and generative tasks in their forward and backward information
processing paths, respectively. During training, the energy landscape
associated with their dynamics becomes highly diverse, with deep attractor
basins separated by high barriers, which impairs mixing between modes. Classical
algorithms address this problem by employing various tempering techniques, which are
both computationally demanding and require global state updates. We demonstrate how
can be achieved in spiking networks endowed with local short-term synaptic
plasticity. Additionally, we discuss how these networks can even outperform
tempering-based approaches when the training data is imbalanced. We thereby
show how biologically inspired, local, spike-triggered synaptic dynamics based
simply on a limited pool of synaptic resources can allow spiking networks to
outperform their non-spiking relatives.
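The "limited pool of synaptic resources" invoked here is commonly formalized as Tsodyks-Markram short-term depression. Below is a minimal sketch of that mechanism, with illustrative parameter values (U, tau_rec) that are not taken from the paper:

```python
import numpy as np

def depressing_synapse(spike_times, U=0.2, tau_rec=200.0):
    """Tsodyks-Markram-style depression: each presynaptic spike consumes
    a fraction U of a limited resource pool R, which recovers towards 1
    with time constant tau_rec (times in ms)."""
    R, last_t, efficacies = 1.0, 0.0, []
    for t in spike_times:
        R = 1.0 - (1.0 - R) * np.exp(-(t - last_t) / tau_rec)  # recovery
        efficacies.append(U * R)  # transmitted amplitude at this spike
        R -= U * R                # resources consumed by the spike
        last_t = t
    return np.array(efficacies)

# A regular 20 Hz train: efficacy relaxes to a rate-dependent steady state.
print(depressing_synapse(np.arange(0.0, 500.0, 50.0)))
```

Intuitively, transiently weakening the synapses that sustain the current attractor lets the network escape deep modes using purely local information, in contrast to the global state updates that tempering requires.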
Stochasticity from function -- why the Bayesian brain may need no noise
An increasing body of evidence suggests that the trial-to-trial variability
of spiking activity in the brain is not mere noise, but rather the reflection
of a sampling-based encoding scheme for probabilistic computing. Since the
precise statistical properties of neural activity are important in this
context, many models assume an ad-hoc source of well-behaved, explicit noise,
either on the input or on the output side of single neuron dynamics, most often
assuming an independent Poisson process in either case. However, these
assumptions are somewhat problematic: neighboring neurons tend to share
receptive fields, rendering both their input and their output correlated; at
the same time, neurons are known to behave largely deterministically, as a
function of their membrane potential and conductance. We suggest that spiking
neural networks may, in fact, have no need for noise to perform sampling-based
Bayesian inference. We study analytically the effect of auto- and
cross-correlations in functionally Bayesian spiking networks and demonstrate
how their effect translates to synaptic interaction strengths, rendering them
controllable through synaptic plasticity. This allows even small ensembles of
interconnected deterministic spiking networks to simultaneously and
co-dependently shape their output activity through learning, enabling them to
perform complex Bayesian computation without any need for noise, which we
demonstrate in silico, both in classical simulation and in neuromorphic
emulation. These results close a gap between the abstract models and the
biology of functionally Bayesian spiking networks, effectively reducing the
architectural constraints imposed on physical neural substrates required to
perform probabilistic computing, be they biological or artificial.
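A toy numerical illustration of the claimed interchangeability (not the paper's analytical treatment): two logistic sampling units can be correlated either through a symmetric weight or through a shared input fluctuation. All parameter values below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def output_correlation(w, shared_noise_amp, n_sweeps=50_000):
    """Two logistic units updated in Gibbs-style sweeps; coupling can come
    either from the symmetric weight w or from a shared noisy input."""
    z = np.zeros(2, dtype=int)
    zs = np.empty((n_sweeps, 2))
    for i in range(n_sweeps):
        eta = shared_noise_amp * rng.standard_normal()  # common input term
        for k in (0, 1):
            u = w * z[1 - k] + eta
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        zs[i] = z
    return np.corrcoef(zs, rowvar=False)[0, 1]

print(output_correlation(w=1.0, shared_noise_amp=0.0))  # synaptic coupling
print(output_correlation(w=0.0, shared_noise_amp=1.5))  # shared-noise coupling
```

Both runs yield a clearly positive output correlation; the paper's contribution is to derive this correspondence analytically, which is what renders the effect of shared noise controllable through synaptic plasticity.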
The high-conductance state enables neural sampling in networks of LIF neurons
The apparent stochasticity of in-vivo neural circuits has long been
hypothesized to represent a signature of ongoing stochastic inference in the
brain. More recently, a theoretical framework for neural sampling has been
proposed, which explains how sample-based inference can be performed by
networks of spiking neurons. One particular requirement of this approach is
that the neural response function closely follows a logistic curve.
Analytical approaches to calculating neural response functions have been the
subject of many theoretical studies. In order to make the problem tractable,
particular assumptions regarding the neural or synaptic parameters are usually
made. However, biologically significant activity regimes exist which are not
covered by these approaches: Under strong synaptic bombardment, as is often the
case in cortex, the neuron is shifted into a high-conductance state (HCS)
characterized by a small membrane time constant. In this regime, synaptic time
constants and refractory periods dominate membrane dynamics.
The core idea of our approach is to separately consider two different "modes"
of spiking dynamics: burst spiking and transient quiescence, in which the
neuron does not spike for extended periods. We treat the former by propagating
the probability density function of the effective membrane potential from spike
to spike within a burst,
while using a diffusion approximation for the latter. We find that our
prediction of the neural response function closely matches simulation data.
Moreover, in the HCS scenario, we show that the neural response function
becomes symmetric and can be well approximated by a logistic function, thereby
providing the correct dynamics in order to perform neural sampling. We hereby
provide not only a normative framework for Bayesian inference in cortex, but
also powerful applications of low-power, accelerated neuromorphic systems to
relevant machine learning tasks.
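As a rough numerical companion to this result, one can measure the activation function of a simple noise-driven LIF model, defined as the fraction of time spent refractory (the "on" state in neural sampling), and fit a logistic curve to it. The crude Euler-Maruyama simulation below is only a sketch with made-up parameters, not the paper's PDF-propagation and diffusion treatment:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def p_on(u_rest, dt=0.1, n_steps=100_000, tau_m=1.0, tau_ref=10.0,
         theta=0.0, u_reset=-0.1, noise=0.5):
    """Fraction of time a noise-driven LIF neuron is refractory ('on'),
    for a given mean membrane potential. The short tau_m (in ms) mimics
    the high-conductance state."""
    u, t_spike, on_steps = u_rest, -np.inf, 0
    for i in range(n_steps):
        t = i * dt
        if t - t_spike < tau_ref:  # refractory: neuron counts as 'on'
            on_steps += 1
            u = u_reset
            continue
        u += (u_rest - u) / tau_m * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if u > theta:
            t_spike = t
    return on_steps / n_steps

us = np.linspace(-1.5, 1.5, 11)
activations = np.array([p_on(u) for u in us])

logistic = lambda u, u0, a: 1.0 / (1.0 + np.exp(-(u - u0) / a))
(u0, a), _ = curve_fit(logistic, us, activations, p0=(0.0, 0.5))
print(f"logistic fit: midpoint u0 = {u0:.2f}, width a = {a:.2f}")
```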
Pattern representation and recognition with accelerated analog neuromorphic systems
Despite being originally inspired by the central nervous system, artificial
neural networks have diverged from their biological archetypes as they have
been remodeled to fit particular tasks. In this paper, we review several
possibilities to reverse-map these architectures to biologically more realistic
spiking networks with the aim of emulating them on fast, low-power neuromorphic
hardware. Since many of these devices employ analog components, which cannot be
perfectly controlled, finding ways to compensate for the resulting effects
represents a key challenge. Here, we discuss three different strategies to
address this problem: the addition of auxiliary network components for
stabilizing activity, the utilization of inherently robust architectures and a
training method for hardware-emulated networks that functions without perfect
knowledge of the system's dynamics and parameters. For all three scenarios, we
corroborate our theoretical considerations with experimental results on
accelerated analog neuromorphic platforms.
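The third strategy, training without perfect knowledge of the system, is in the spirit of hardware-in-the-loop learning: the device is treated as a black box whose measured outputs drive the parameter updates. A toy sketch of this idea, with a software stand-in for the hardware and entirely hypothetical distortions:

```python
import numpy as np

rng = np.random.default_rng(3)

def emulated_network(w, x):
    """Stand-in for an analog network: the forward pass contains fixed
    distortions (gain and offset) that the training loop never sees."""
    hidden_gain, hidden_offset = 1.3, 0.2  # unknown to the trainer
    return np.tanh(hidden_gain * (x @ w) + hidden_offset)

def in_the_loop_step(w, x, y_target, eps=1e-3, lr=0.1):
    """One update using only measured outputs: gradients are estimated
    by finite differences on the black-box forward pass."""
    base = np.mean((emulated_network(w, x) - y_target) ** 2)
    grad = np.zeros_like(w)
    for i in range(len(w)):
        w_p = w.copy()
        w_p[i] += eps
        grad[i] = (np.mean((emulated_network(w_p, x) - y_target) ** 2) - base) / eps
    return w - lr * grad

# Toy regression target; the loop adapts w around the unknown distortions.
x = rng.standard_normal((64, 3))
y = np.tanh(x @ np.array([0.5, -1.0, 0.8]))
w = np.zeros(3)
for _ in range(500):
    w = in_the_loop_step(w, x, y)
print("final loss:", np.mean((emulated_network(w, x) - y) ** 2))
```

The residual loss reflects whatever mismatch the trainable weights cannot absorb; the point is that the update rule never needs the distortion parameters themselves.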
26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017
This work was produced as part of the activities of FAPESP Research, Disseminations and Innovation Center for Neuromathematics (grant 2013/07699-0, S. Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).
25th Annual Computational Neuroscience Meeting: CNS-2016
The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in delivering the functionality of neural systems, it is difficult to study in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type as a single model neuron (e.g. PD neurons), assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recordings of PD neurons show differences between the spike timings of these neurons, which may indicate functional variability. Here, we modelled the two PD neurons of the STG separately in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between the conductance parameters of ionic currents. Our results reproduce the experimental finding of an increasing spike-time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron, and larger potassium conductance values in the follower neuron imply longer delays between spikes (see Fig. 17). Neuromodulators change the conductance parameters of neurons while maintaining the ratios of these parameters [5]. Our results show that such changes may shift the individual contributions of the two PD neurons to the PD phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for analysing the mechanisms and impact of the functional variability of neurons within the neural circuits to which they belong.
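One ingredient of this model that lends itself to a compact sketch is the constraint that conductance parameters obey fixed correlations, and that neuromodulation rescales them while preserving their ratios. The numbers below are hypothetical, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_pd_conductances(n, g_ca_range=(10.0, 30.0), k_ratio=2.5, jitter=0.05):
    """Draw maximal conductances for n PD-like model neurons such that
    g_K is tied to g_Ca by a (slightly noisy) fixed ratio, mimicking the
    reported correlations between ionic-current parameters."""
    g_ca = rng.uniform(*g_ca_range, size=n)
    g_k = k_ratio * g_ca * (1.0 + jitter * rng.standard_normal(n))
    return g_ca, g_k

g_ca, g_k = sample_pd_conductances(2)
modulation = 1.2  # a neuromodulator scales both conductances, preserving ratios
print("baseline: ", g_ca, g_k)
print("modulated:", g_ca * modulation, g_k * modulation)
```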